Generative-model-based lossless image compression algorithms have achieved great success in improving compression ratio. However, the throughput of most of them is below 1 MB/s even with the most advanced AI accelerator chips, which keeps them out of most real-world applications, where 100 MB/s is typically required. In this paper, we propose PILC, an end-to-end lossless image compression framework that achieves 200 MB/s for both compression and decompression on a single NVIDIA Tesla V100 GPU, 10 times faster than the most efficient previous approach. To obtain this result, we first develop an AI codec that combines an autoregressive model with VQ-VAE and performs well in a lightweight setting, and we then design a low-complexity entropy coder that works well with this codec. Experiments show that our framework achieves a compression ratio about 30% better than PNG across multiple datasets. We believe this is an important step toward bringing AI compression into commercial use.
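To make the connection between generative modelling and lossless compression concrete, the sketch below (not PILC's codec; the function and variable names are ours) estimates the theoretical code length an entropy coder needs when a model assigns a probability to every symbol: the better the model fits the data, the fewer bits are needed.

```python
import numpy as np

def ideal_code_length_bits(symbols, probs):
    # An entropy coder can encode symbol s in about -log2(probs[s]) bits,
    # so the total is the negative log2-likelihood of the data under the model.
    return float(-np.sum(np.log2(probs[symbols])))

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=10_000)      # toy 8-bit "image" data
uniform_model = np.full(256, 1 / 256)           # no modelling at all
print(ideal_code_length_bits(pixels, uniform_model) / pixels.size)  # ~8.0 bits/pixel
```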
Explicit deep generative models (DGMs), e.g., VAEs and normalizing flows, have been shown to offer an effective alternative for data modelling in lossless compression. However, DGMs themselves normally require large storage space, which erodes the advantage brought by accurate data density estimation. To eliminate the need to store a separate model for each target dataset, we propose a novel setting that starts from a pretrained deep generative model and compresses data batches while adapting the model with a dynamical system for only one epoch. We formalise this setting as One-Shot Online Adaptation (OSOA) of DGMs for lossless compression and propose a vanilla algorithm under it. Experimental results show that vanilla OSOA can save significant time compared with training bespoke models, and significant space compared with storing all bespoke models, by using one model for all targets. With the same number of adaptation steps or the same adaptation time, vanilla OSOA is shown to achieve better space efficiency, e.g., 47% less space, than fine-tuning the pretrained model and saving the fine-tuned model. Moreover, we showcase the potential of OSOA and motivate more sophisticated OSOA algorithms by showing further space or time savings from multiple updates per batch and from early stopping.
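The core mechanic can be sketched as a coding loop in which each batch is coded under the current model and the model is then updated once, so the decoder can replay the identical updates and no adapted weights ever have to be stored. This is a minimal illustration under assumed interfaces (a model that returns the per-batch negative log-likelihood in nats), not the paper's vanilla OSOA algorithm.

```python
import math
import torch

def osoa_sketch(model, data_loader, lr=1e-4):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    total_bits = 0.0
    for batch in data_loader:      # a single pass over the data: one epoch
        nll = model(batch)         # nats needed to code this batch right now
        total_bits += nll.item() / math.log(2)
        opt.zero_grad()
        nll.backward()             # adapt only *after* the batch is coded,
        opt.step()                 # so the decoder can mirror every update
    return total_bits
```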
It is estimated that the world produced 59 ZB (5.9 × 10^13 GB) of data in 2020, resulting in enormous costs for both data storage and transmission. Fortunately, recent advances in deep generative models have spurred a new class of so-called "neural compression" algorithms, which significantly outperform traditional codecs in terms of compression ratio. Unfortunately, the application of neural compression attracts little commercial interest due to its limited bandwidth; developing highly efficient frameworks is therefore of critical practical importance. In this paper, we discuss lossless compression using normalizing flows, which have demonstrated great capacity for achieving high compression ratios. Accordingly, we introduce iFlow, a new method for efficient lossless compression. We first propose the Modular Scale Transform (MST) and a novel family of numerically invertible flow transformations based on MST. We then introduce the Uniform Base Conversion System (UBCS), a fast uniform-distribution codec incorporated into iFlow, enabling efficient compression. iFlow achieves state-of-the-art compression ratios and is 5x faster than other high-performance schemes. Furthermore, the techniques presented in this paper can be used to accelerate the coding time of a broad class of flow-based algorithms.
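As a rough illustration of what a uniform-distribution codec does (this is the textbook mixed-radix construction, not the UBCS algorithm itself), symbols that are uniform over different bases can be packed into a single integer and unpacked exactly:

```python
def uniform_encode(symbols, bases):
    # Mixed-radix packing: a symbol uniform over {0, ..., b-1} costs
    # exactly log2(b) bits on average, which is optimal for that distribution.
    code = 0
    for s, b in zip(symbols, bases):
        assert 0 <= s < b
        code = code * b + s
    return code

def uniform_decode(code, bases):
    out = []
    for b in reversed(bases):
        out.append(code % b)
        code //= b
    return out[::-1]

syms, bases = [3, 0, 7, 2], [5, 2, 11, 4]
assert uniform_decode(uniform_encode(syms, bases), bases) == syms
```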
Storing today's rapidly growing big data is nontrivial and demands high-performance lossless compression techniques. Likelihood-based generative models have seen success in lossless compression, among which flow-based models are desirable because they allow exact data-likelihood optimisation with bijective mappings. However, common continuous flows are at odds with the discreteness of coding schemes, which requires either 1) imposing strict constraints on the flow models, degrading performance, or 2) coding numerous bijective-mapping errors, reducing efficiency. In this paper, we investigate volume-preserving flows for lossless compression and show that a bijective mapping without error is possible. We propose the Numerically Invertible Volume Preserving Flow (iVPF), derived from general volume-preserving flows. By introducing novel computation algorithms on flow models, an exact bijective mapping is achieved without any numerical error. We also propose a lossless compression algorithm based on iVPF. Experiments on various datasets show that the iVPF-based algorithm achieves a state-of-the-art compression ratio among lightweight compression algorithms.
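The appeal of volume-preserving, integer-exact transforms can be seen in a toy additive coupling layer (our illustration, not the iVPF construction): the Jacobian determinant is 1 and, because the predicted shift is rounded to an integer, the inverse recovers the input bit-exactly, which is exactly what a lossless coder requires.

```python
import numpy as np

def coupling_forward(x1, x2, shift_net):
    t = np.round(shift_net(x1)).astype(np.int64)   # integer shift => exact inverse
    return x1, x2 + t

def coupling_inverse(y1, y2, shift_net):
    t = np.round(shift_net(y1)).astype(np.int64)
    return y1, y2 - t

shift_net = lambda x: 3.7 * x + 0.2                # stand-in for a neural network
x1 = np.arange(4, dtype=np.int64)
x2 = np.array([9, -1, 4, 0], dtype=np.int64)
y1, y2 = coupling_forward(x1, x2, shift_net)
assert np.array_equal(coupling_inverse(y1, y2, shift_net)[1], x2)   # no numerical error
```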
Object detection has been dominated by anchor-based detectors for several years. Recently, anchor-free detectors have become popular due to the proposal of FPN and Focal Loss. In this paper, we first point out that the essential difference between anchor-based and anchor-free detection is actually how positive and negative training samples are defined, which leads to the performance gap between them. If they adopt the same definition of positive and negative samples during training, there is no obvious difference in the final performance, no matter whether regressing from a box or a point. This shows that how to select positive and negative training samples is important for current object detectors. Then, we propose Adaptive Training Sample Selection (ATSS) to automatically select positive and negative samples according to the statistical characteristics of objects. It significantly improves the performance of both anchor-based and anchor-free detectors and bridges the gap between them. Finally, we discuss the necessity of tiling multiple anchors per location on the image to detect objects. Extensive experiments conducted on MS COCO support our aforementioned analysis and conclusions. With the newly introduced ATSS, we improve state-of-the-art detectors by a large margin, to 50.7% AP, without introducing any overhead. The code is available at https://github.com/sfzhang15/ATSS.
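The adaptive threshold at the heart of ATSS is easy to state: for each ground-truth box, the IoU statistics of its candidate anchors define the positive/negative boundary. The sketch below is a simplification (it omits the center-distance candidate pre-selection and the requirement that anchor centers lie inside the ground-truth box):

```python
import numpy as np

def adaptive_positive_mask(candidate_ious):
    # The threshold adapts to the object's own statistics instead of a fixed value.
    thr = candidate_ious.mean() + candidate_ious.std()
    return candidate_ious >= thr

ious = np.array([0.05, 0.12, 0.30, 0.48, 0.55, 0.61])
print(adaptive_positive_mask(ious))   # only the highest-IoU candidate(s) become positive
```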
As one of the most important psychic stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing and provides essential technical support for lie detection, psychological analysis and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Despite the recent efforts of several spontaneous ME datasets to alleviate this problem, the amount of available data is still tiny. To solve the problem of ME data hunger, we construct a dynamic spontaneous ME dataset with the largest ME data scale to date, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over the course of three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments and objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER and provides a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot, or can only marginally, benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification for the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models, namely by exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
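A minimal sketch of what "distilling token relations" can look like (simplified: relations here are computed from output tokens, whereas the paper derives them from attention queries, keys and values):

```python
import torch
import torch.nn.functional as F

def relation_distill_loss(student_tokens, teacher_tokens, tau=1.0):
    # Match token-to-token affinity maps rather than raw features, so the
    # student may use a different feature width than the teacher.
    # Shapes: (batch, num_tokens, dim); num_tokens must agree across models.
    def log_relations(x):
        sim = torch.bmm(x, x.transpose(1, 2)) / (tau * x.shape[-1] ** 0.5)
        return F.log_softmax(sim, dim=-1)
    return F.kl_div(log_relations(student_tokens),
                    log_relations(teacher_tokens),
                    log_target=True, reduction="batchmean")
```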
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, yet its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
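A rough sketch of the general token-fusion pattern (an illustration under our own assumptions, not the CMT architecture or its positional-encoding scheme): projected image and point-cloud tokens are concatenated and attended to by object queries that regress boxes directly.

```python
import torch
import torch.nn as nn

class MultiModalDetectorSketch(nn.Module):
    def __init__(self, img_dim=256, pts_dim=64, d_model=256, num_queries=100):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, d_model)
        self.pts_proj = nn.Linear(pts_dim, d_model)
        self.queries = nn.Parameter(torch.randn(num_queries, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.box_head = nn.Linear(d_model, 10)   # e.g. center, size, yaw, velocity

    def forward(self, img_tokens, pts_tokens, img_pos, pts_pos):
        # Positional embeddings are assumed to already have width d_model.
        mem = torch.cat([self.img_proj(img_tokens) + img_pos,
                         self.pts_proj(pts_tokens) + pts_pos], dim=1)
        q = self.queries.unsqueeze(0).expand(img_tokens.size(0), -1, -1)
        return self.box_head(self.decoder(q, mem))   # (batch, num_queries, 10)
```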
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
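For intuition, an illustrative patch trigger in the spirit of NAIVEATTACK might be stamped onto a fraction of the raw images before distillation begins (DOORPING would instead keep re-optimising the trigger throughout distillation, which is not shown here; all names and parameters below are ours):

```python
import torch

def stamp_trigger(images, labels, target_class, patch=3, value=1.0, rate=0.1):
    # images: (N, C, H, W) floats in [0, 1]; labels: (N,) long tensor.
    images, labels = images.clone(), labels.clone()
    n_poison = int(rate * images.size(0))
    idx = torch.randperm(images.size(0))[:n_poison]
    images[idx, :, -patch:, -patch:] = value   # bright square in the bottom-right corner
    labels[idx] = target_class                 # flip poisoned labels to the target class
    return images, labels
```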
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression problem, in line with the easy-to-hard nature of the human learning process. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets. The experimental results indicate that the performance of PMT-IQA is superior to the comparison approaches, and that both the MS and PMT modules improve the model's performance.
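Since the abstract does not spell out the progressive schedule, the following is only a hypothetical easy-to-hard weighting of two task heads, meant to illustrate the general idea; the actual PMT-IQA scheme may differ.

```python
import math

def progressive_task_weights(epoch, total_epochs):
    # Early on, the easier auxiliary task dominates; the weight then shifts
    # smoothly to the harder quality-regression task.
    p = epoch / max(total_epochs - 1, 1)
    w_reg = 0.5 * (1.0 - math.cos(math.pi * p))   # ramps from 0 to 1
    return {"auxiliary": 1.0 - w_reg, "regression": w_reg}

# total loss = w["auxiliary"] * aux_loss + w["regression"] * regression_loss
```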